Elucidating and accurately predicting the druggability and bioactivity of molecules plays a pivotal role in drug design and discovery, and remains an open challenge. Recently, graph neural networks (GNNs) have made remarkable advances in graph-based molecular property prediction. However, current graph-based deep learning methods ignore the hierarchical information of molecules and the relationships between feature channels. In this study, we propose a well-designed hierarchical informative graph neural network framework (termed HiGNN) for predicting molecular properties by utilizing molecular graphs and chemically synthesizable BRICS fragments. Furthermore, a plug-and-play feature-wise attention block is first designed into the HiGNN architecture to adaptively recalibrate atomic features after the message-passing phase. Extensive experiments demonstrate that HiGNN achieves state-of-the-art predictive performance on many challenging drug-discovery-related benchmark datasets. In addition, we devise a molecule-fragment similarity mechanism to comprehensively investigate the interpretability of the HiGNN model at the subgraph level, showing that HiGNN, as a powerful deep learning tool, can help chemists and pharmacists identify the key components of molecules for designing better molecules with the desired properties or functions. The source code is publicly available at https://github.com/idruglab/hignn.
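The feature-wise attention block described above recalibrates atomic features channel by channel after message passing. As an illustrative sketch only (the mean aggregation, the gating form, and all names here are assumptions, not the HiGNN implementation), a squeeze-and-excitation-style recalibration could look like:

```python
import math

def mean_aggregate(features, neighbors):
    """One toy message-passing step: average each atom's own and neighbor features."""
    out = []
    for i, feat in enumerate(features):
        msgs = [features[j] for j in neighbors[i]] + [feat]
        out.append([sum(col) / len(msgs) for col in zip(*msgs)])
    return out

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def featurewise_attention(features, w):
    """Recalibrate features per channel: squeeze to channel-wise means,
    compute a sigmoid gate per channel, and rescale every atom's features."""
    n, d = len(features), len(features[0])
    squeeze = [sum(f[c] for f in features) / n for c in range(d)]  # channel means
    gate = [sigmoid(w[c] * squeeze[c]) for c in range(d)]          # per-channel weight
    return [[f[c] * gate[c] for c in range(d)] for f in features]

# Toy molecular graph: 3 atoms in a chain, 2-dimensional atom features.
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
nbrs = [[1], [0, 2], [1]]
h = mean_aggregate(feats, nbrs)
h = featurewise_attention(h, w=[2.0, -2.0])  # channel 1 is gated down
```

In HiGNN the gate is learned end-to-end inside the GNN; here the weights `w` are fixed only to make the rescaling visible.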
translated by 谷歌翻译
The concept of walkable urban development has gained increased attention due to its public health, economic, and environmental sustainability benefits. Unfortunately, land zoning and historic under-investment have resulted in spatial inequality in walkability and social inequality among residents. We tackle the problem of Walkability Optimization through the lens of combinatorial optimization. The task is to select locations in which additional amenities (e.g., grocery stores, schools, restaurants) can be allocated to improve resident access via walking while taking into account existing amenities and providing multiple options (e.g., for restaurants). To this end, we derive Mixed-Integer Linear Programming (MILP) and Constraint Programming (CP) models. Moreover, we show that the problem's objective function is submodular in special cases, which motivates an efficient greedy heuristic. We conduct a case study on 31 underserved neighbourhoods in the City of Toronto, Canada. MILP finds the best solutions in most scenarios but does not scale well with network size. The greedy algorithm scales well and finds near-optimal solutions. Our empirical evaluation shows that neighbourhoods with low walkability have great potential for transformation into pedestrian-friendly neighbourhoods by strategically placing new amenities. Allocating 3 additional grocery stores, schools, and restaurants can improve the "WalkScore" by more than 50 points (on a scale of 100) for 4 neighbourhoods and reduce the walking distances to amenities for 75% of all residential locations to 10 minutes for all amenity types. Our code and paper appendix are available at https://github.com/khalil-research/walkability.
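The greedy heuristic motivated by submodularity can be sketched on a toy coverage-style instance; this is not the paper's MILP/CP formulation, and the distance data, penalty constant, and function names below are illustrative assumptions:

```python
def walk_gain(dists, open_sites, residences):
    """Total reduction in nearest-amenity walking distance, relative to having
    no amenity at all (a large penalty), when sites in `open_sites` are open."""
    PENALTY = 1e6
    total = 0.0
    for r in residences:
        nearest = min((dists[r][s] for s in open_sites), default=PENALTY)
        total += PENALTY - nearest
    return total

def greedy_allocate(dists, candidates, residences, k):
    """Greedily open k amenity sites, each step adding the site with the largest
    marginal gain -- a (1 - 1/e) approximation when the objective is
    monotone submodular."""
    chosen = []
    for _ in range(k):
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: walk_gain(dists, chosen + [c], residences))
        chosen.append(best)
    return chosen

# Toy instance: 3 residential locations, 3 candidate sites,
# walking distances in minutes; each site mainly serves one residence.
D = {0: {0: 2, 1: 9, 2: 9}, 1: {0: 9, 1: 2, 2: 9}, 2: {0: 9, 1: 9, 2: 2}}
sites = greedy_allocate(D, candidates=[0, 1, 2], residences=[0, 1, 2], k=2)
```

Each greedy step re-evaluates the marginal gain of every remaining candidate, which is why the heuristic scales much better than solving the full MILP.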
We present a lightweight post-processing method to refine the semantic segmentation results of point cloud sequences. Most existing methods usually segment frame by frame and encounter the inherent ambiguity of the problem: based on a measurement in a single frame, labels are sometimes difficult to predict even for humans. To remedy this problem, we propose to explicitly train a network to refine these results predicted by an existing segmentation method. The network, which we call the P2Net, learns the consistency constraints between coincident points from consecutive frames after registration. We evaluate the proposed post-processing method both qualitatively and quantitatively on the SemanticKITTI dataset that consists of real outdoor scenes. The effectiveness of the proposed method is validated by comparing the results predicted by two representative networks with and without the refinement by the post-processing network. Specifically, qualitative visualization validates the key idea that labels of the points that are difficult to predict can be corrected with P2Net. Quantitatively, overall mIoU is improved from 10.5% to 11.7% for PointNet [1] and from 10.8% to 15.9% for PointNet++ [2].
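The consistency idea behind this post-processing can be illustrated with a simple majority vote over registered coincident points; P2Net itself is a learned refinement network, so the voting rule and all names below are only illustrative assumptions:

```python
from collections import Counter

def refine_labels(seq_labels, correspondences):
    """Refine per-point labels of the current frame by majority vote over the
    coincident points found in registered neighboring frames.
    seq_labels: list of per-frame label lists; frame 0 is the one being refined.
    correspondences: for each point in frame 0, a list of (frame, index) pairs."""
    refined = []
    for i, lbl in enumerate(seq_labels[0]):
        votes = [lbl] + [seq_labels[f][j] for f, j in correspondences[i]]
        refined.append(Counter(votes).most_common(1)[0][0])
    return refined

# Toy sequence: 3 frames, 2 points each; point 1 of frame 0 disagrees with
# its matches in frames 1 and 2 and gets corrected by the vote.
labels = [["road", "car"], ["road", "building"], ["road", "building"]]
corr = [[(1, 0), (2, 0)], [(1, 1), (2, 1)]]
out = refine_labels(labels, corr)
```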
Mobile stereo-matching systems have become an important part of many applications, such as automated-driving vehicles and autonomous robots. Accurate stereo-matching methods usually lead to high computational complexity; however, mobile platforms have only limited hardware resources to keep their power consumption low; this makes it difficult to maintain both an acceptable processing speed and accuracy on mobile platforms. To resolve this trade-off, we herein propose a novel acceleration approach for the well-known zero-mean normalized cross-correlation (ZNCC) matching cost calculation algorithm on a Jetson TX2 embedded GPU. In our method for accelerating ZNCC, target images are scanned in a zigzag fashion to efficiently reuse one pixel's computation for its neighboring pixels; this reduces the amount of data transmission and increases the utilization of on-chip registers, thus increasing the processing speed. As a result, our method is 2x faster than the traditional image scanning method, and 26% faster than the latest NCC method. By combining this technique with the domain transformation (DT) algorithm, our system achieves a real-time processing speed of 32 fps on a Jetson TX2 GPU for 1,280x384-pixel images with a maximum disparity of 128. Additionally, the evaluation results on the KITTI 2015 benchmark show that our combined system is 7.26% more accurate than the same algorithm combined with census, while maintaining almost the same processing speed.
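For reference, the ZNCC matching cost itself can be sketched as follows; this shows only the cost function on two flattened windows, not the zigzag scanning or GPU register reuse that the acceleration is about:

```python
import math

def zncc(left, right):
    """Zero-mean normalized cross-correlation between two equal-size pixel
    windows (flattened lists); 1.0 is a perfect match, -1.0 anti-correlation."""
    n = len(left)
    ml, mr = sum(left) / n, sum(right) / n
    num = sum((a - ml) * (b - mr) for a, b in zip(left, right))
    den = math.sqrt(sum((a - ml) ** 2 for a in left) *
                    sum((b - mr) ** 2 for b in right))
    return num / den if den else 0.0

# Subtracting the means makes the score invariant to brightness shifts:
w = [10, 20, 30, 40]
shifted = [v + 50 for v in w]
score = zncc(w, shifted)
```

This brightness invariance is why ZNCC is preferred over plain correlation for real stereo pairs, at the price of the extra mean/variance computation the paper accelerates.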
Accurate mapping of forests is critical for forest management and carbon stock monitoring. Deep learning is becoming more popular in Earth Observation (EO); however, the availability of reference data limits its potential in wide-area forest mapping. To overcome these limitations, here we introduce contrastive regression into EO-based forest mapping and develop a novel semi-supervised regression framework for wall-to-wall mapping of continuous forest variables. It combines a supervised contrastive regression loss and a semi-supervised Cross-Pseudo Regression loss. The framework is demonstrated over a boreal forest site using Copernicus Sentinel-1 and Sentinel-2 imagery for mapping forest tree height. The achieved prediction accuracies are considerably better than those of a vanilla UNet or traditional regression models, with a relative RMSE of 15.1% at the stand level. We expect that the developed framework can be used for modeling other forest variables and EO datasets.
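The stand-level relative RMSE reported above can be computed as the RMSE normalized by the mean observed value; note that this normalization is a common convention and an assumption here, since the abstract does not spell out the exact definition:

```python
import math

def relative_rmse(pred, obs):
    """Root-mean-square error expressed as a percentage of the mean
    observed value (one common definition of relative RMSE)."""
    n = len(obs)
    rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / n)
    return 100.0 * rmse / (sum(obs) / n)

# Hypothetical stand-level tree heights in meters.
heights_obs = [18.0, 22.0, 20.0]
heights_pred = [17.0, 23.5, 19.0]
err = relative_rmse(heights_pred, heights_obs)
```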
3D point clouds can flexibly represent continuous surfaces and are used in a wide range of applications; however, the lack of structural information makes point cloud recognition challenging. Recent edge-aware methods mainly use edge information as an extra feature that describes local structures to facilitate learning. Although these methods show that incorporating edges into the network design is beneficial, they generally lack interpretability, leaving users wondering how exactly edges help. To shed light on this issue, in this study we propose the Diffusion Unit (DU), which handles edges in an interpretable manner while providing decent improvements. Our method is interpretable in three ways. First, we theoretically show that DU learns to perform task-beneficial edge enhancement and suppression. Second, we experimentally observe and verify this edge enhancement and suppression behavior. Third, we empirically demonstrate that this behavior contributes to performance improvement. Extensive experiments on challenging benchmarks verify the superiority of DU in terms of both interpretability and performance gains. Specifically, our method achieves state-of-the-art performance in object part segmentation on ShapeNet part and scene segmentation on S3DIS. Our source code will be released at https://github.com/martianxiu/DiffusionUnit.
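The claimed edge enhancement and suppression can be illustrated with a toy scalar diffusion step, where the sign of a coefficient decides whether feature differences across an edge are smoothed (suppressed) or sharpened (enhanced); this is only a sketch of the intuition, not the actual learned DU layer:

```python
import math

def diffusion_unit(features, neighbors, a):
    """Toy diffusion step on per-point scalar features: each point moves by a
    bounded function of its feature differences to neighbors. A positive
    coefficient diffuses (edge suppression); a negative one sharpens
    (edge enhancement)."""
    out = []
    for i, f in enumerate(features):
        flow = sum(math.tanh(a * (features[j] - f)) for j in neighbors[i])
        out.append(f + 0.5 * flow / len(neighbors[i]))
    return out

# Scalar features on a 4-point chain with an "edge" between points 1 and 2.
feats = [0.0, 0.0, 1.0, 1.0]
nbrs = [[1], [0, 2], [1, 3], [2]]
smoothed = diffusion_unit(feats, nbrs, a=1.0)    # positive: suppress the edge
sharpened = diffusion_unit(feats, nbrs, a=-1.0)  # negative: enhance the edge
```

In the paper the coefficient is learned per task, which is what makes the enhancement-versus-suppression behavior observable and interpretable.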
Trojan attacks pose a severe threat to AI systems. Recent work on Transformer models has gained explosive popularity, and the central role of self-attention in them is indisputable. This raises a core question: can we reveal Trojans through the attention mechanisms of BERTs and ViTs? In this paper, we investigate the attention-hijacking pattern in Trojan AIs, in which the trigger token "kidnaps" the attention weights whenever a specific trigger is present. We observe a consistent attention-hijacking pattern in Trojan Transformers from both the natural language processing (NLP) and computer vision (CV) domains. This intriguing property helps us understand the Trojan mechanism in BERTs and ViTs. We also propose an Attention-Hijacking Trojan Detector (AHTD) to discriminate Trojan AIs from clean ones.
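The hijacking intuition, one token receiving a disproportionate share of attention mass, can be sketched as a simple screen over an attention map; the threshold and all names are illustrative assumptions, not the AHTD method:

```python
def attention_to_token(attn, t):
    """Average attention mass that all query positions direct to key token t.
    attn: square row-stochastic matrix, attn[q][k]."""
    return sum(row[t] for row in attn) / len(attn)

def flag_hijacking(attn, threshold=0.5):
    """Flag token indices that 'kidnap' attention: they receive more than
    `threshold` of the attention mass on average across queries."""
    return [t for t in range(len(attn[0]))
            if attention_to_token(attn, t) > threshold]

# Toy 3-token attention map where token 2 hijacks most of the attention.
attn = [[0.1, 0.1, 0.8],
        [0.2, 0.1, 0.7],
        [0.1, 0.0, 0.9]]
suspects = flag_hijacking(attn)
```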
Video-based automatic assessment of surgical skills is a promising task for assisting young surgical trainees, especially in resource-poor regions. Existing works usually resort to a joint CNN-LSTM framework, in which an LSTM models long-term relationships over spatially pooled short-term CNN features. However, this practice inevitably neglects the differences among semantic concepts in the spatial dimension, such as tools, tissues, and background, hindering the subsequent temporal relationship modeling. In this paper, we propose a novel skill assessment framework, Video Semantic Aggregation (ViSA), which discovers different semantic parts and aggregates them across spatiotemporal dimensions. The explicit discovery of semantic parts provides an explanatory visualization that helps understand the neural network's decisions. It also enables us to further incorporate auxiliary information, such as kinematic data, to improve representations and performance. Experiments on two datasets show the competitiveness of ViSA compared with state-of-the-art methods. The source code is available at: bit.ly/miccai2022visa.
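The spatial side of semantic aggregation, pooling per-location features into one descriptor per discovered part, can be sketched with a hard assignment; ViSA discovers the parts automatically, so the fixed assignment and all names below are illustrative assumptions:

```python
def aggregate_by_part(features, assignment, num_parts):
    """Aggregate per-location features into one descriptor per semantic part
    (e.g., tool / tissue / background) by averaging the locations assigned
    to the same part."""
    dim = len(features[0])
    sums = [[0.0] * dim for _ in range(num_parts)]
    counts = [0] * num_parts
    for feat, p in zip(features, assignment):
        counts[p] += 1
        sums[p] = [s + v for s, v in zip(sums[p], feat)]
    return [[s / c for s in part] if c else part
            for part, c in zip(sums, counts)]

# 4 spatial locations with 2-dimensional features, assigned to 2 parts.
feats = [[1.0, 0.0], [3.0, 0.0], [0.0, 2.0], [0.0, 4.0]]
parts = aggregate_by_part(feats, assignment=[0, 0, 1, 1], num_parts=2)
```

The per-part descriptors, rather than one globally pooled feature, are what the subsequent temporal model consumes, so tool motion is no longer mixed with background.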
Recent multi-view multimedia applications struggle between high-resolution (HR) visual experience and storage or bandwidth constraints. This paper therefore proposes a multi-view image super-resolution (MVISR) task, which aims to increase the resolution of multi-view images captured from the same scene. One solution is to apply image or video super-resolution (SR) methods to reconstruct HR results from the low-resolution (LR) input view. However, these methods cannot handle large angular transformations between views or exploit the information in all of the multi-view images. To address these problems, we propose MVSRnet, which uses geometry information to extract sharp details from all LR multi-view images to support the SR of the LR input view. Specifically, the proposed Geometry-Aware Reference Synthesis module in MVSRnet uses geometry information and all multi-view LR images to synthesize pixel-aligned HR reference images. The proposed Dynamic High-Frequency Search network then fully exploits the high-frequency textural details in the reference images for SR. Extensive experiments on several benchmarks show that our method significantly improves over the state of the art.
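Pixel-aligned reference synthesis relies on warping other views into the input view's geometry; a one-row disparity warp illustrates the idea (the integer disparity and zero fallback are illustrative assumptions, not MVSRnet's module):

```python
def warp_by_disparity(row, disparity):
    """Shift one image row by a per-pixel disparity so that a reference view
    becomes pixel-aligned with the target view; pixels whose source falls
    outside the row fall back to 0."""
    n = len(row)
    return [row[i + disparity[i]] if 0 <= i + disparity[i] < n else 0
            for i in range(n)]

# Reference row shifted left by 1 relative to the target: disparity 1 realigns it.
ref = [0, 10, 20, 30]
aligned = warp_by_disparity(ref, disparity=[1, 1, 1, 1])
```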
Under stereo settings, the performance of JPEG artifact removal can be further improved by exploiting the additional information provided by a second view. However, incorporating this information into stereo-image JPEG artifact removal is a major challenge, since the existing compression artifacts make pixel-level view alignment difficult. In this paper, we propose a novel parallax transformer network (PTNet) to integrate the information from stereo image pairs for stereo-image JPEG artifact removal. Specifically, a well-designed symmetric bi-directional parallax transformer module is proposed to match features with similar textures between the two views, instead of performing pixel-level view alignment. Owing to the issues of occlusions and boundaries, a confidence-based cross-view fusion module is proposed to achieve better feature fusion for both views, where the cross-view features are weighted by confidence maps. In particular, we adopt a coarse-to-fine design for the cross-view interaction, leading to better performance. Comprehensive experimental results demonstrate that our PTNet can effectively remove compression artifacts and achieves superior performance over other state-of-the-art methods.
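The confidence-based cross-view fusion can be sketched as a per-pixel blend weighted by a confidence map; the blending rule and all names below are illustrative assumptions, not the PTNet module:

```python
def fuse_views(own, cross, confidence):
    """Confidence-based cross-view fusion: blend a view's own features with
    features matched from the other view, weighted per pixel by a confidence
    map in [0, 1]. Low confidence (occlusions, boundaries) keeps the view's
    own features; high confidence trusts the cross-view features."""
    return [o * (1.0 - c) + x * c for o, x, c in zip(own, cross, confidence)]

left_feat = [1.0, 1.0, 1.0]
matched_right = [3.0, 3.0, 3.0]
conf = [1.0, 0.5, 0.0]  # rightmost pixel is occluded: confidence 0
fused = fuse_views(left_feat, matched_right, conf)
```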